Computer Vision | Practice: How Computers See in the Real World
Updated: 2025-11-10
Description
In this episode of Big Ideas Only, host Mikkel Svold explores how computers “see” with Andreas Møgelmose (Associate Professor of AI, Aalborg University; Visual Analysis & Perception Lab). We unpack what computer vision is, where it already works at scale, what’s still hard, and the real-world trade-offs around privacy and surveillance, from self-driving cars and robots to hospital X-rays and trash sorting.
In this episode, you’ll learn about:
- What computer vision really is: turning camera input into understanding and action (a minimal classification sketch follows this list)
- When vision alone is enough, and when you need lidar, radar, or time-of-flight sensors
- The biggest driver: industrial automation
- How automated triage of X-rays can cut ER waiting times, with a doctor still reviewing the final result
- Why the classic “who should the car hit?” dilemma misses how real autonomy works
- 3D understanding with stereo cameras and other depth-sensing methods (see the stereo-depth sketch after this list)
- Why sorting messy, mixed real-world waste remains one of the hardest vision challenges
- Humanoid robots — what already works and what’s still far from reality
- Where research is headed: from fine-grained recognition to explainability and machine unlearning
- On-device versus cloud processing, and how that choice shapes privacy risk
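At its simplest, “turning camera input into understanding” means mapping a frame of pixels to a label that downstream logic can act on. Below is a minimal sketch, assuming PyTorch/torchvision are installed; the model choice and the file name are illustrative placeholders, not the specific tooling discussed in the episode.

```python
# Minimal sketch: camera frame in, class label out.
# Assumes torchvision >= 0.13; "frame.jpg" is a hypothetical file.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT      # pretrained ImageNet weights
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()              # resize, crop, normalize

img = Image.open("frame.jpg").convert("RGB")   # one camera frame
batch = preprocess(img).unsqueeze(0)           # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)
label = weights.meta["categories"][logits.argmax().item()]
print(label)  # e.g. "Labrador retriever": understanding a system can act on
```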
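Stereo vision recovers depth by comparing two horizontally offset views: a nearby point shifts further between the images than a distant one, and that shift (the disparity d) converts to depth via Z = f * B / d, where f is the focal length in pixels and B the distance between the cameras. Here is a minimal sketch with OpenCV’s block matcher, assuming a calibrated, rectified stereo pair; the file names and calibration numbers are placeholders.

```python
# Minimal sketch: per-pixel depth from a rectified stereo pair.
# Assumes OpenCV (cv2); image paths and calibration values are placeholders.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# For each pixel, find how far its patch shifted between views (disparity).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0  # fixed-point -> pixels

# Triangulate: Z = f * B / d. Clip tiny/invalid disparities to avoid
# dividing by zero (unmatched pixels come back as -1).
f_px, baseline_m = 700.0, 0.12   # placeholder calibration: focal length, baseline
depth_m = (f_px * baseline_m) / disparity.clip(min=0.1)
print(depth_m.shape)             # a depth map in metres, one value per pixel
```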
Episode Content
00:01 Why it matters that computers can “see”
02:04 When vision alone is enough — and when it isn’t
04:40 Healthcare in practice: automated X-ray checks for faster casts and shorter ER waits
05:39 Accuracy, human oversight, and how every case gets double-checked in morning rounds
07:20 The trolley-problem myth: how real autonomous systems minimize risk instead of choosing victims
12:32 Choosing the right approach: classification versus 3D navigation
13:36 Getting depth: stereo vision, lidar, radar, and time-of-flight sensors
16:01 Why sorting mixed, messy waste is still one of the hardest vision problems
18:03 Humanoid robots: balance, stairs, and why sight is the foundation for movement
19:21 Status check: “solved” in some areas, far from it in others
20:40 Privacy and ethics: on-device versus cloud processing, and who controls the data
27:37 What’s still missing: fine-grained recognition, explainability, and machine unlearning
32:28 Current projects: pre-anesthesia screening, color detection in video, and robust segmentation
33:32 Outro and teaser for a deeper theoretical dive next episode
This podcast is produced by Montanus.